refactor(python-sdk): llm.ThinkingConfig #1847
Merged
Conversation
Collaborator
Author
Deploying with

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ✅ Deployment successful! View logs | v2-docs | 7b517b3 | Jan 09 2026, 06:41 PM |
willbakst reviewed Jan 8, 2026

d76745d to 165afa7 (Compare)

This was referenced Jan 9, 2026
willbakst approved these changes Jan 9, 2026
Collaborator
Author
Merge activity
llm.ThinkingConfig has level (minimal / low / medium / high) and can be left unset for auto. It also has encode_thoughts_as_text.

I dropped include_summary because only Google lets you enable/disable summaries (OpenAI allows auto / verbose / concise, but concise apparently applies only to computer-use models). Since we don't have at least two providers with consistent semantics, I'm turning summaries on for Google by default for consistent behavior. If someone wants the ability to disable them for Google, we can add that in 2.x once we get a request for it.
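For context, here is a minimal sketch of the semantics described above. This is an illustrative dataclass only, not the SDK's actual implementation, and the usage line at the bottom is hypothetical:

```python
# Illustrative sketch of the described semantics -- not the python-sdk's real code.
# `level` is optional (unset means the provider's automatic thinking behavior),
# and `encode_thoughts_as_text` controls whether thoughts are surfaced as plain text.
from dataclasses import dataclass
from typing import Literal, Optional

ThinkingLevel = Literal["minimal", "low", "medium", "high"]


@dataclass
class ThinkingConfig:
    # None = auto: let the provider pick its default thinking budget.
    level: Optional[ThinkingLevel] = None
    # When True, thought/reasoning output is encoded back into text content.
    encode_thoughts_as_text: bool = False


# Hypothetical usage; the exact way this is passed to a call may differ in the SDK.
config = ThinkingConfig(level="high", encode_thoughts_as_text=True)
```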
